
    From Health Decisions to Performance: The Network Effect

    This study uses a simulation as a vehicle for applying social network research. Eighty-five companies were created as part of a simulated healthcare industry. Our results suggest that companies positioning themselves at pivotal points within the network outperform companies that do not. The findings show the applicability of network theory and the use of simulations in applied research on healthcare decision-making.
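
    The abstract does not say which network measure defines a "pivotal point." As a hedged illustration only, the sketch below computes betweenness centrality (one common choice) for a hypothetical network of healthcare companies using networkx; the company names and ties are invented for the example.

```python
# Minimal sketch (not the study's actual analysis): betweenness centrality is
# one common way to quantify "pivotal" network positions. Company names and
# ties below are hypothetical.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("InsurerA", "HospitalB"), ("HospitalB", "ClinicC"),
    ("HospitalB", "PharmaD"), ("PharmaD", "ClinicC"),
    ("InsurerA", "PharmaD"), ("ClinicC", "LabE"),
])

# Companies sitting on many shortest paths between others score highest,
# i.e., they occupy the network's pivotal positions.
centrality = nx.betweenness_centrality(G)
for company, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{company}: {score:.2f}")
```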

    Adiabatic theorem for non-Hermitian time-dependent open systems

    In conventional quantum mechanics (i.e., Hermitian QM) the adiabatic theorem for systems subjected to time-periodic fields holds only for bound systems and not for open ones (where ionization and dissociation take place) [D. W. Hone, R. Ketzmerick, and W. Kohn, Phys. Rev. A 56, 4045 (1997)]. Here, with the help of the (t,t') formalism combined with the complex scaling method, we derive an adiabatic theorem for open systems and provide an analytical criterion for the validity of the adiabatic limit. The use of the complex scaling transformation plays a key role in our derivation. As a numerical example we apply the adiabatic theorem we derived to a 1D model Hamiltonian of the Xe atom which interacts with strong, monochromatic sine-square laser pulses. We show that the generation of odd-order harmonics and the absence of hyper-Raman lines, even when the pulses are extremely short, can be explained with the help of the adiabatic theorem we derived.
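
    The abstract does not reproduce the derivation; as background only, the sketch below recalls the standard complex-scaling transformation and why it matters here: it turns the decaying (ionizing) states of an open system into square-integrable eigenstates with complex eigenvalues, which is what allows bound-state-style adiabatic arguments to be carried over.

```latex
% Standard complex-scaling transformation (background sketch, not the paper's derivation):
% coordinates are rotated into the complex plane,
\[
  x \;\to\; x\,e^{i\theta}, \qquad
  \hat H_\theta \;=\; -\frac{e^{-2i\theta}}{2}\,\frac{\partial^2}{\partial x^2}
                      + V\!\left(x\,e^{i\theta}\right),
\]
% which turns resonance (ionizing) states into square-integrable eigenstates
% with complex eigenvalues
\[
  E_{\mathrm{res}} \;=\; E_r - \tfrac{i}{2}\,\Gamma ,
\]
% so the open (decaying) problem can be handled with bound-state-like techniques,
% which is what makes an adiabatic theorem possible for open systems.
```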

    Entropy, Optimization and Counting

    In this paper we study the problem of computing max-entropy distributions over a discrete set of objects subject to observed marginals. Interest in such distributions arises due to their applicability in areas such as statistical physics, economics, biology, information theory, machine learning, combinatorics and, more recently, approximation algorithms. A key difficulty in computing max-entropy distributions has been to show that they have polynomially-sized descriptions. We show that such descriptions exist under general conditions. Subsequently, we show how algorithms for (approximately) counting the underlying discrete set can be translated into efficient algorithms to (approximately) compute max-entropy distributions. In the reverse direction, we show how access to algorithms that compute max-entropy distributions can be used to count, which establishes an equivalence between counting and computing max-entropy distributions.
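
    As a hedged background sketch (standard convex-duality facts, not the paper's proofs), the block below states the generic max-entropy problem over a discrete family subject to marginals and its exponential-family dual form, which is the sense in which a compact description of the distribution can exist at all.

```latex
% Sketch of the generic problem (standard notation; \mathcal{F} is the discrete
% family over a ground set E, and \theta the observed marginals):
\[
  \max_{p \ge 0} \; -\!\!\sum_{S \in \mathcal{F}} p_S \log p_S
  \quad \text{s.t.} \quad
  \sum_{S \in \mathcal{F} :\, e \in S} p_S = \theta_e \;\; \forall e \in E,
  \qquad \sum_{S \in \mathcal{F}} p_S = 1 .
\]
% By convex duality the optimizer is an exponential-family distribution with one
% dual variable \lambda_e per marginal constraint,
\[
  p_S \;\propto\; \exp\Big( \sum_{e \in S} \lambda_e \Big),
\]
% so a "polynomially-sized description" amounts to storing the vector \lambda
% rather than all |\mathcal{F}| probabilities.
```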

    DDGun: An untrained method for the prediction of protein stability changes upon single and multiple point variations

    Background: Predicting the effect of single point variations on protein stability constitutes a crucial step toward understanding the relationship between protein structure and function. To this end, several methods have been developed to predict changes in the Gibbs free energy of unfolding (ΔΔG) between wild-type and variant proteins, using sequence and structure information. Most of the available methods, however, do not exhibit the anti-symmetric prediction property, which guarantees that the predicted ΔΔG value for a variation is the exact opposite of that predicted for the reverse variation, i.e., ΔΔG(A → B) = -ΔΔG(B → A), where A and B are amino acids. Results: Here we introduce simple anti-symmetric features, based on evolutionary information, which are combined to define an untrained method, DDGun (DDG untrained). DDGun is a simple approach based on evolutionary information that predicts the ΔΔG for single and multiple variations from sequence and structure information (DDGun3D). Our method achieves remarkable performance without any training on the experimental datasets, reaching Pearson correlation coefficients between predicted and measured ΔΔG values of ~0.5 and ~0.4 for single and multiple site variations, respectively. Surprisingly, DDGun's performance is comparable with that of state-of-the-art methods. DDGun also naturally predicts multiple site variations, thereby defining a benchmark method for both single site and multiple site predictors. DDGun is anti-symmetric by construction, predicting the ΔΔG of a reciprocal variation as almost equal (depending on the sequence profile) to the -ΔΔG of the direct variation. This is a valuable property that is missing in the majority of the methods. Conclusions: Evolutionary information alone, combined in an untrained method, can achieve remarkably high performance in the prediction of ΔΔG upon protein mutation. Non-trained approaches like DDGun represent a valid benchmark both for scoring the predictive power of individual features and for assessing the learning capability of supervised methods.
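
    To make the "anti-symmetric by construction" point concrete, here is a minimal toy sketch, not DDGun's actual feature set: any feature built as a difference of per-residue scores, f(A → B) = s(B) - s(A), flips sign exactly when the variation is reversed. The hydrophobicity scale used below is just one illustrative per-residue score.

```python
# Minimal sketch (not DDGun's actual features): a difference-of-scores feature
# is anti-symmetric by construction, since f(B -> A) = s(A) - s(B) = -f(A -> B).
# Kyte-Doolittle hydrophobicity is used here only as an illustrative score;
# DDGun combines several evolutionary (profile-based) features.
KD_HYDROPHOBICITY = {"A": 1.8, "V": 4.2, "L": 3.8, "D": -3.5, "K": -3.9}

def antisymmetric_feature(wild: str, variant: str) -> float:
    """Toy stability-change feature with the property f(A->B) == -f(B->A)."""
    return KD_HYDROPHOBICITY[variant] - KD_HYDROPHOBICITY[wild]

assert antisymmetric_feature("A", "V") == -antisymmetric_feature("V", "A")
```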

    Marine Infectious Disease Dynamics and Outbreak Thresholds: Contact Transmission, Pandemic Infection, and the Potential Role of Filter Feeders

    Disease-causing organisms can have significant impacts on marine species and communities. However, the dynamics that underlie the emergence of disease outbreaks in marine ecosystems still lack the equivalent level of description, conceptual understanding, and modeling context routinely present in terrestrial systems. Here, we propose a theoretical basis for modeling the transmission of marine infectious diseases (MIDs), developed from simple models of the spread of infectious disease. The models represent the dynamics of a variety of host-pathogen systems, including those unique to marine systems in which disease is transmitted by contact with waterborne pathogens, both directly and through filter-feeding processes. Overall, the analysis of the epizootiological models focused on the most relevant processes that interact to drive the initiation and termination of epizootics. A priori, systems with multi-step disease infections (e.g., infection-death-particle release-filtration-transmission) would be expected to have a reduced dependence on individual parameters, resulting in inherently slower transmission rates. This is demonstrably not the case; thus, these alternative transmission pathways must also considerably increase the rates of the processes involved in transmission. Scavengers removing dead infected animals may inhibit disease spread in both contact-based and waterborne pathogen-based diseases. The capacity of highly infected animals, both alive and dead, to release a substantial number of infective elements into the water column, making them available to suspension feeders, results in such diseases being highly infective with a very small low-abundance refuge. In these systems, the body burden of pathogens and the relative importance between the release and the removal rate of pathogens in the host tissue or water column become paramount. Two processes are of potential consequence in inhibiting epizootics. First, large water volumes above the benthic susceptible populations can function as a sink for pathogens. Second, unlike contact-based disease models in which an increase in the number of susceptible individuals in the population increases the likelihood of transmission and epizootic development, large populations of filter feeders can reduce this likelihood through the overfiltration of infective particles.
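
    The abstract does not give the model equations. The sketch below is a minimal, assumed susceptible/infected plus waterborne-pathogen system (all parameter names and values are illustrative, not the paper's), written only to show the two competing roles of filtration described above: it delivers infective particles to hosts, but a large filter-feeding population also clears particles from the water column.

```python
# Minimal sketch (not the paper's model): S/I filter feeders plus a waterborne
# pathogen pool P. All parameters are hypothetical placeholders.
import numpy as np
from scipy.integrate import odeint

def marine_si_waterborne(y, t, beta, f, lam, mu, delta):
    S, I, P = y
    dS = -beta * f * P * S                      # infection via filtered particles
    dI = beta * f * P * S - mu * I              # disease mortality of infected hosts
    dP = lam * I - f * (S + I) * P - delta * P  # release, filtration removal, dilution sink
    return [dS, dI, dP]

t = np.linspace(0.0, 200.0, 2001)
y0 = [1000.0, 1.0, 0.0]                         # mostly susceptible, one infected, clean water
params = (1e-4, 0.01, 5.0, 0.05, 0.1)           # beta, f, lam, mu, delta (hypothetical)
S, I, P = odeint(marine_si_waterborne, y0, t, args=params).T
print(f"peak infected: {I.max():.1f}, final susceptible: {S[-1]:.1f}")
```

    Raising the filter-feeder population in this toy model increases the removal term f*(S+I)*P as well as the infection term, which is the mechanism behind the overfiltration effect noted in the abstract.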

    Algorithm Engineering in Robust Optimization

    Robust optimization is a young and emerging field of research that has received a considerable increase in interest over the last decade. In this paper, we argue that the algorithm engineering methodology fits very well to the field of robust optimization and yields a rewarding new perspective on both the current state of research and open research directions. To this end we go through the algorithm engineering cycle of design and analysis of concepts, development and implementation of algorithms, and theoretical and experimental evaluation. We show that many ideas of algorithm engineering have already been applied in publications on robust optimization. Most work on robust optimization is devoted to the analysis of concepts and the development of algorithms, some papers deal with the evaluation of a particular concept in case studies, and work on the comparison of concepts is just starting. What is still a drawback in many papers on robustness is the missing step of feeding the results of the experiments back into the design.

    Extended Formulations in Mixed-integer Convex Programming

    We present a unifying framework for generating extended formulations for the polyhedral outer approximations used in algorithms for mixed-integer convex programming (MICP). Extended formulations lead to fewer iterations of outer approximation algorithms and generally faster solution times. First, we observe that all MICP instances from the MINLPLIB2 benchmark library are conic representable with standard symmetric and nonsymmetric cones. Conic reformulations are shown to be effective extended formulations themselves because they encode separability structure. For mixed-integer conic-representable problems, we provide the first outer approximation algorithm with finite-time convergence guarantees, opening a path for the use of conic solvers for continuous relaxations. We then connect the popular modeling framework of disciplined convex programming (DCP) to the existence of extended formulations independent of conic representability. We present evidence that our approach can yield significant gains in practice, with the solution of a number of open instances from the MINLPLIB2 benchmark library.
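
    As background in standard notation (a sketch, not lifted from the paper), the block below recalls the gradient-cut form of polyhedral outer approximation and one simple way an extended formulation exploits separability, which is the structural effect the abstract credits for fewer iterations.

```latex
% Background sketch: outer approximation replaces a smooth convex constraint
% g(x) <= 0 by gradient cuts at trial points x^k,
\[
  g(x^k) + \nabla g(x^k)^{\mathsf T} (x - x^k) \;\le\; 0, \qquad k = 1,\dots,K .
\]
% An extended formulation exploits separability, g(x) = \sum_i g_i(x_i), by
% lifting with auxiliary variables t_i,
\[
  g_i(x_i) \;\le\; t_i \;\; \forall i, \qquad \sum_i t_i \;\le\; 0 ,
\]
% so each cut linearizes a single low-dimensional term g_i and the resulting
% polyhedral approximation is tighter per iteration.
```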

    A second-order cone formulation of continuous CTA model

    In this paper we consider a minimum-distance Controlled Tabular Adjustment (CTA) model for statistical disclosure limitation (control) of tabular data. The goal of the CTA model is to find the closest safe table to some original tabular data set that contains sensitive information. Closeness is usually measured using the l1 or l2 norm, with each measure having its advantages and disadvantages. Recently, in [4] a regularization of the l1-CTA using the Pseudo-Huber function was introduced in an attempt to combine positive characteristics of both l1-CTA and l2-CTA. All three models can be solved using appropriate versions of Interior-Point Methods (IPM). It is known that IPMs in general work better on well-structured problems such as conic optimization problems; thus, reformulating these CTA models as conic optimization problems may be advantageous. We present reformulations of the Pseudo-Huber-CTA and l1-CTA as Second-Order Cone (SOC) optimization problems and test the validity of the approach on a small example of a two-dimensional tabular data set.
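
    The abstract states the reformulations without the constraints. As a hedged sketch of the standard SOC building blocks such a reformulation rests on (not the paper's exact model), the block below shows how an absolute value and a Pseudo-Huber term are each expressed with second-order cone constraints on the cell adjustments z_i.

```latex
% Standard SOC building blocks (sketch). For the l1 objective, each absolute
% value is bounded by an auxiliary variable via a 2-dimensional cone
% (t_i, z_i) \in Q^2, i.e. t_i >= |z_i|:
\[
  \min \sum_i t_i \quad \text{s.t.} \quad t_i \;\ge\; |z_i| \;\; \forall i .
\]
% The Pseudo-Huber term \varphi_\delta(z) = \delta^2(\sqrt{1+(z/\delta)^2}-1)
% equals \delta\sqrt{\delta^2+z^2}-\delta^2, so it is modeled as
\[
  \min \sum_i \big( \delta\, s_i - \delta^2 \big) \quad \text{s.t.} \quad
  \big\| (\delta,\, z_i) \big\|_2 \;\le\; s_i \;\; \forall i ,
\]
% i.e., one 3-dimensional second-order cone constraint per cell adjustment z_i.
```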